Standard language model training employs gold human documents or human-human interaction data, and treats all training data as positive examples. Growing evidence shows that even with very large amounts of positive training data, issues remain that can be alleviated with relatively small amounts of negative data, i.e., examples of what the model should not do. In this work, we propose a novel procedure to train with such data called the CRINGE loss (ContRastive Iterative Negative GEneration). We show the effectiveness of this approach across three different experiments, on the tasks of safe generation, contradiction avoidance, and open-domain dialogue. Our models outperform multiple strong baselines while being conceptually simple and easy to train and implement.
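To make the idea concrete, here is a minimal PyTorch sketch of a contrastive penalty on negative tokens in the spirit of the CRINGE loss; the function name, the top-k candidate sampling, and the omission of masking (a sampled candidate could coincide with the negative token) are our own illustrative assumptions, not necessarily the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def cringe_negative_loss(logits, neg_targets, k=5):
    # logits: (T, V) per-position scores; neg_targets: (T,) tokens to discourage.
    topk_scores, _ = logits.topk(k, dim=-1)                     # (T, k)
    # Sample one "positive" candidate per position from the model's own top-k.
    idx = torch.multinomial(F.softmax(topk_scores, dim=-1), 1)  # (T, 1)
    pos_scores = topk_scores.gather(-1, idx).squeeze(-1)        # (T,)
    neg_scores = logits.gather(-1, neg_targets.unsqueeze(-1)).squeeze(-1)
    # Binary contrast: the sampled candidate should outscore the negative token.
    # (A full implementation would mask candidates equal to the negative token.)
    pair = torch.stack([pos_scores, neg_scores], dim=-1)        # (T, 2)
    labels = torch.zeros(pair.size(0), dtype=torch.long)        # class 0 = positive
    return F.cross_entropy(pair, labels)
```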
Video super-resolution is one of the most popular tasks on mobile devices, being widely used for automatic improvement of low-bitrate and low-resolution video streams. While numerous solutions have been proposed for this problem, they are usually quite computationally demanding, demonstrating low FPS rates and poor power efficiency on mobile devices. In this Mobile AI challenge, we address this problem and ask the participants to design an end-to-end real-time video super-resolution solution for mobile NPUs optimized for low energy consumption. The participants were provided with the REDS training dataset containing video sequences for a 4X video upscaling task. The runtime and power efficiency of all models were evaluated on the powerful MediaTek Dimensity 9000 platform with a dedicated AI processing unit capable of accelerating both floating-point and quantized neural networks. All proposed solutions are fully compatible with this NPU, demonstrating frame rates of up to 500 FPS and power consumption of 0.2 [Watt / 30 FPS]. A detailed description of all models developed in the challenge is provided in this paper.
Prompting, which casts downstream applications as language modeling tasks, has been shown to be sample-efficient compared with standard fine-tuning of pre-trained models. However, one pitfall of prompting is the need for manually designed patterns, whose outcomes can be unintuitive and require large validation sets to tune. To tackle this challenge, we propose AutoSeq, a fully automatic prompting method: (1) we adopt natural-language prompts on sequence-to-sequence models, enabling free-form generation and a larger label search space; (2) we propose label sequences, phrases of unrestricted length that verbalize the labels, which remove the need for manual templates and are more expressive than single label words; (3) we use beam search to automatically generate a large number of label-sequence candidates and propose contrastive re-ranking to obtain the best combinations. AutoSeq significantly outperforms other no-manual-design methods such as soft prompt tuning, adapter tuning, and automatic search of single label words; the generated label sequences are even better than curated manual ones on a variety of tasks. Our method reveals the potential of sequence-to-sequence models in few-shot learning and sheds light on a path toward generic and automatic prompting. The source code of this paper is available at https://github.com/thunlp/seq2seq-prompt.
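As a rough illustration of the candidate-generation step, the sketch below runs Hugging Face Transformers beam search over a T5 sentinel span; the sentiment template, the model choice, and all hyperparameters are our assumptions, and the contrastive re-ranking stage is omitted.

```python
from transformers import T5ForConditionalGeneration, T5Tokenizer

tok = T5Tokenizer.from_pretrained("t5-base")
model = T5ForConditionalGeneration.from_pretrained("t5-base")

# Hypothetical sentiment template; T5's sentinel <extra_id_0> marks the label span.
prompt = "A wonderful movie in every aspect. It was <extra_id_0> ."
inputs = tok(prompt, return_tensors="pt")
# Beam search yields many free-form label-sequence candidates for later re-ranking.
outputs = model.generate(**inputs, num_beams=20, num_return_sequences=20,
                         max_new_tokens=8)
candidates = [tok.decode(o, skip_special_tokens=True) for o in outputs]
print(candidates[:5])
```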
Continual relation extraction (CRE) requires the model to continually learn new relations from class-incremental data streams. In this paper, we propose a Frustratingly Easy but Effective Approach (FEA) to CRE with two learning stages: 1) Fast Adaptation (FA), which warms up the model using only the new data; and 2) Balanced Tuning (BT), which fine-tunes the model on balanced memory data. Despite its simplicity, FEA achieves performance comparable (on TACRED) or superior (on FewRel) to state-of-the-art baselines. Through careful examination, we find that data imbalance between the new and old relations leads to a skewed decision boundary in the head classifier over the pre-trained encoder, thus hurting overall performance. In FEA, the FA stage unleashes the potential of memory data for the subsequent fine-tuning, while the BT stage helps establish a more balanced decision boundary. With a unified view, we find that two strong CRE baselines can be subsumed into the proposed training pipeline. The success of FEA also provides actionable insights and suggestions for future model design in CRE.
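A hedged sketch of the two-stage schedule described above: `train_epoch` is an assumed caller-supplied training routine, and the balancing helper is illustrative rather than the paper's exact procedure.

```python
import random
from collections import defaultdict

def balance_by_relation(examples, per_relation):
    # Assumed helper: sample an equal number of examples per relation label.
    buckets = defaultdict(list)
    for ex in examples:
        buckets[ex["relation"]].append(ex)
    return [ex for exs in buckets.values()
            for ex in random.sample(exs, min(per_relation, len(exs)))]

def train_fea(model, train_epoch, new_data, memory, per_relation=10):
    # Stage 1: Fast Adaptation -- warm up on the new relations' data only.
    train_epoch(model, new_data)
    # Stage 2: Balanced Tuning -- fine-tune on a class-balanced mix of memory
    # and new data, which de-skews the head classifier's decision boundary.
    train_epoch(model, balance_by_relation(memory + new_data, per_relation))
```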
Image coding for machines (ICM) aims to compress images for AI task analysis rather than to meet human perception. Learning a feature that is both general (for AI tasks) and compact (for compression) is pivotal to its success. In this paper, we attempt to develop an ICM framework by learning universal features while also considering compression; we name such features omnipotent features and the corresponding framework Omni-ICM. Considering that self-supervised learning (SSL) improves feature generalization, we integrate it with the compression task into the Omni-ICM framework to learn omnipotent features. However, it is non-trivial to coordinate semantic modeling in SSL with redundancy removal in compression, so we design a novel Information Filtering (IF) module that adaptively drops information weakly related to AI tasks (e.g., some texture redundancy) by cooperatively performing instance discrimination and entropy minimization. Unlike previous task-specific solutions, Omni-ICM supports AI task analysis directly on the learned omnipotent features, without joint training or extra transformations. Although simple and intuitive, Omni-ICM significantly outperforms existing traditional and learning-based codecs on multiple fundamental vision tasks.
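The cooperation of instance discrimination and entropy minimization can be pictured as one joint objective; the following sketch is only our reading of the abstract, with the rate estimate and the weighting `lam` as assumptions.

```python
import torch
import torch.nn.functional as F

def information_filtering_objective(sim_matrix, est_bits, lam=0.01):
    # sim_matrix: (N, N) similarities between two augmented views of N images;
    # the diagonal holds each instance's own pair (instance discrimination).
    targets = torch.arange(sim_matrix.size(0))
    id_loss = F.cross_entropy(sim_matrix, targets)  # keep task-relevant semantics
    return id_loss + lam * est_bits                 # entropy term drops weak info
```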
The accumulation of time-series data and the absence of labels make time-series anomaly detection (AD) a self-supervised task for deep learning. Methods based on a single normality assumption can only touch on certain aspects of the whole normality and are insufficient to detect diverse anomalies. Among them, contrastive learning methods adopted for AD always choose negative pairs that are both normal, which works against the purpose of the AD task. Existing methods based on multiple assumptions are usually two-staged, first applying a pre-training process whose objectives may differ from AD, so their performance is limited by the pre-trained representations. This paper proposes a deep Contrastive One-Class Anomaly detection method for time series (COCA), which combines the normality assumptions of contrastive learning and one-class classification. The key idea is to treat the representation and the reconstructed representation as a positive pair for negative-free contrastive learning, which we name sequence contrast. We then apply a contrastive loss function composed of invariance and variance terms: the former simultaneously optimizes the losses under both assumptions, while the latter prevents hypersphere collapse. Extensive experiments on four real-world time-series datasets show that the proposed method achieves superior, state-of-the-art performance. The code is publicly available at https://github.com/ruiking04/coca.
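A minimal sketch of such a negative-free "sequence contrast" loss with invariance and variance terms; the normalization, the variance floor of 1, and the equal weighting of the two terms are assumptions rather than the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def coca_style_loss(q, q_rec, eps=1e-4):
    # q: (N, d) representations of N windows; q_rec: representations of
    # their reconstructions -- the negative-free positive pairs.
    q, q_rec = F.normalize(q, dim=-1), F.normalize(q_rec, dim=-1)
    # Invariance term: pull each representation toward its reconstruction.
    invariance = (1 - F.cosine_similarity(q, q_rec, dim=-1)).mean()
    # Variance term: keep per-dimension std above a floor to avoid
    # hypersphere collapse (all points mapping to one center).
    std = torch.sqrt(q.var(dim=0) + eps)
    variance = F.relu(1.0 - std).mean()
    return invariance + variance
```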
Conversational question answering (CQA) systems aim to provide natural-language answers to users in information-seeking dialogues. Existing CQA benchmarks compare models against pre-collected human-human conversations, using the ground-truth answers provided in the conversation history. It remains unclear whether we can rely on this static evaluation for model development, and whether current systems generalize well to real-world human-machine conversations. In this work, we conduct a large-scale human evaluation of state-of-the-art CQA systems, in which human evaluators converse with the models and judge the correctness of their answers. We find that the distribution of human-machine conversations differs drastically from that of human-human conversations, and that human and gold-history evaluations disagree on model ranking. We further investigate how to improve automatic evaluation, and propose a question-rewriting mechanism based on predicted history that correlates better with human judgments. Finally, we discuss the impact of various modeling strategies and future directions toward better conversational question answering systems.
This paper presents SimCSE, a simple contrastive learning framework that greatly advances state-of-the-art sentence embeddings. We first describe an unsupervised approach, which takes an input sentence and predicts itself in a contrastive objective, with only standard dropout used as noise. This simple method works surprisingly well, performing on par with previous supervised counterparts. We find that dropout acts as minimal data augmentation, and removing it leads to a representation collapse. Then, we propose a supervised approach, which incorporates annotated pairs from natural language inference datasets into our contrastive learning framework by using "entailment" pairs as positives and "contradiction" pairs as hard negatives. We evaluate SimCSE on standard semantic textual similarity (STS) tasks, and our unsupervised and supervised models using BERT-base achieve an average of 76.3% and 81.6% Spearman's correlation respectively, a 4.2% and 2.2% improvement compared to the previous best results. We also show, both theoretically and empirically, that the contrastive learning objective regularizes pre-trained embeddings' anisotropic space to be more uniform, and that it better aligns positive pairs when supervised signals are available. (Footnote: we randomly sample 10^6 sentences from English Wikipedia and fine-tune BERT-base with learning rate = 3e-5, N = 64; in all our experiments, no STS training sets are used.)
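The unsupervised objective is easy to state in code: encode the same batch twice so that dropout yields two noisy views, then contrast against in-batch negatives. A minimal sketch follows; the pooling inside `encoder` (assumed to return one (N, d) embedding per sentence) and the temperature value follow common practice and are assumptions.

```python
import torch
import torch.nn.functional as F

def unsup_simcse_loss(encoder, input_ids, attention_mask, temp=0.05):
    # Two forward passes of the same batch: different dropout masks give
    # two "views" of each sentence, used as the positive pair.
    z1 = encoder(input_ids, attention_mask)  # first dropout mask
    z2 = encoder(input_ids, attention_mask)  # second dropout mask
    z1, z2 = F.normalize(z1, dim=-1), F.normalize(z2, dim=-1)
    sim = z1 @ z2.T / temp                   # (N, N) cosine similarities
    labels = torch.arange(sim.size(0))       # positives on the diagonal,
    return F.cross_entropy(sim, labels)      # other sentences are negatives
```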
The recent GPT-3 model (Brown et al., 2020) achieves remarkable few-shot performance solely by leveraging a natural-language prompt and a few task demonstrations as input context. Inspired by their findings, we study few-shot learning in a more practical scenario, where we use smaller language models for which fine-tuning is computationally efficient. We present LM-BFF (better few-shot fine-tuning of language models; alternatively, language models' best friends forever), a suite of simple and complementary techniques for fine-tuning language models on a small number of annotated examples. Our approach includes (1) prompt-based fine-tuning together with a novel pipeline for automating prompt generation; and (2) a refined strategy for dynamically and selectively incorporating demonstrations into each context. Finally, we present a systematic evaluation for analyzing few-shot performance on a range of NLP tasks, including classification and regression. Our experiments demonstrate that our methods combine to dramatically outperform standard fine-tuning procedures in this low-resource setting, achieving up to 30% absolute improvement, and 11% on average across all tasks. Our approach makes minimal assumptions on task resources and domain expertise, and hence constitutes a strong task-agnostic method for few-shot learning. Our implementation is publicly available at https://github.com/princeton-nlp/LM-BFF.
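The demonstration-in-context idea can be sketched as simple string construction; the template, the verbalizer, and the helper below are illustrative assumptions, not the released pipeline.

```python
def build_prompt(x, demos, verbalizer, template="{s} It was [MASK]."):
    # Prompt-based input: the test sentence with a [MASK] to classify,
    # followed by one filled-in demonstration per class.
    parts = [template.format(s=x)]
    for xd, yd in demos:  # demonstrations selected dynamically per input
        parts.append(template.replace("[MASK]", verbalizer[yd]).format(s=xd))
    return " ".join(parts)

# Example: binary sentiment with one demonstration per class.
verbalizer = {0: "terrible", 1: "great"}
demos = [("A masterpiece.", 1), ("Utterly dull.", 0)]
print(build_prompt("No reason to watch.", demos, verbalizer))
# -> "No reason to watch. It was [MASK]. A masterpiece. It was great. ..."
```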
Owing to the strongly increased computational power of modern hardware, pre-trained deep learning models (e.g., BERT, GPT-3) learned on large-scale datasets have shown their effectiveness over conventional methods. This big progress is mainly contributed by the representation ability of the transformer and its variant architectures. In this paper, we study low-level computer vision tasks (e.g., denoising, super-resolution, and deraining) and develop a new pre-trained model, namely the image processing transformer (IPT). To maximally excavate the capability of the transformer, we present to utilize the well-known ImageNet benchmark for generating a large amount of corrupted image pairs. The IPT model is trained on these images with multi-heads and multi-tails. In addition, contrastive learning is introduced for well adapting to different image processing tasks, so that after fine-tuning, the pre-trained model can be efficiently employed on the desired task. With only one pre-trained model, IPT outperforms the current state-of-the-art methods on various low-level benchmarks. Code is available at https://github.com/huawei-noah/Pretrained-IPT and https://gitee.com/mindspore/mindspore/tree/master/model_zoo/research/cv/ipt.
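Generating multi-task corrupted pairs from a clean image can be sketched as below; the specific degradations (Gaussian noise level, bicubic downsampling for the x2/x4 super-resolution heads) are common choices we assume for illustration, not necessarily the paper's exact recipe.

```python
import torch
import torch.nn.functional as F

def make_corrupted_pairs(img):
    # img: clean (C, H, W) tensor in [0, 1], e.g. from ImageNet.
    pairs = {}
    # Denoising head: additive Gaussian noise on the input.
    pairs["denoise"] = (img + 0.1 * torch.randn_like(img), img)
    # Super-resolution heads: bicubic-downsampled low-resolution inputs.
    for s in (2, 4):
        lr = F.interpolate(img.unsqueeze(0), scale_factor=1 / s,
                           mode="bicubic", align_corners=False).squeeze(0)
        pairs[f"sr_x{s}"] = (lr, img)
    return pairs  # one (corrupted, clean) pair per task head
```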